
    An ICA algorithm for analyzing multiple data sets

    In this paper we derive an independent-component analysis (ICA) method for analyzing two or more data sets simultaneously. Our model allows for components specific to individual data sets as well as components common to all the sets. We exploit the assumed time autocorrelation of the independent signal components and base our algorithm on prediction analysis. We illustrate the algorithm using a simple image separation example. Our aim is to apply this method to functional brain mapping using functional magnetic resonance imaging (fMRI).
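    The separation step described above relies on the temporal autocorrelation of the sources rather than on higher-order statistics. The sketch below is not the authors' multi-set algorithm; it is a minimal, single-data-set illustration (AMUSE-style, assuming NumPy) of how diagonalizing one time-lagged covariance of whitened signals can recover temporally autocorrelated components.

    ```python
    # Hedged sketch, not the paper's algorithm: AMUSE-style separation from a
    # single time-lagged covariance, illustrating the autocorrelation idea only.
    import numpy as np

    def lagged_cov_ica(X, lag=1):
        """X: (n_channels, n_samples) mixed signals; returns estimated sources."""
        X = X - X.mean(axis=1, keepdims=True)

        # Whitening: rotate and scale so the zero-lag covariance is the identity.
        d, E = np.linalg.eigh(np.cov(X))
        W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
        Z = W @ X

        # Symmetrized time-lagged covariance of the whitened data.
        C = Z[:, :-lag] @ Z[:, lag:].T / (Z.shape[1] - lag)
        C = 0.5 * (C + C.T)

        # Its eigenvectors give the rotation that separates the sources.
        _, V = np.linalg.eigh(C)
        return V.T @ Z

    # Toy usage: unmix two temporally structured signals from a random mixture.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 2000)
    S = np.vstack([np.sin(2 * np.pi * 1.3 * t), np.sign(np.sin(2 * np.pi * 0.7 * t))])
    S_hat = lagged_cov_ica(rng.normal(size=(2, 2)) @ S)
    ```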

    A Spatially Robust ICA Algorithm for Multiple fMRI Data Sets

    In this paper we derive an independent-component analysis (ICA) method for analyzing two or more data sets simultaneously. Our model extracts independent components common to all data sets as well as independent data-set-specific components. We use time-delayed autocorrelations to obtain independent signal components and base our algorithm on prediction analysis. We applied this method to functional brain mapping using functional magnetic resonance imaging (fMRI). The results of our three-subject analysis demonstrate the robustness of the algorithm to the spatial misalignment inherent in multiple-subject fMRI data sets.
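    To connect this to the multi-set setting, one simple extension (again a hedged sketch and not the authors' model, which also allows data-set-specific components) is to assume a shared mixing across data sets, average the per-set lagged covariances after joint whitening, and diagonalize the average to estimate the common components.

    ```python
    # Hedged sketch, not the authors' algorithm: SOBI-style averaging of per-set
    # lagged covariances under the assumption of one mixing shared by all sets.
    import numpy as np

    def shared_lagged_cov_ica(datasets, lag=1):
        """datasets: list of (n_channels, n_samples) arrays with a shared mixing."""
        datasets = [X - X.mean(axis=1, keepdims=True) for X in datasets]

        # Whiten with the average zero-lag covariance over all data sets.
        d, E = np.linalg.eigh(np.mean([np.cov(X) for X in datasets], axis=0))
        W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T

        # Average the symmetrized lagged covariances of the whitened sets.
        C = np.zeros((len(d), len(d)))
        for X in datasets:
            Z = W @ X
            Cl = Z[:, :-lag] @ Z[:, lag:].T / (Z.shape[1] - lag)
            C += 0.5 * (Cl + Cl.T) / len(datasets)

        # Diagonalizing the averaged covariance yields the shared components.
        _, V = np.linalg.eigh(C)
        return [V.T @ (W @ X) for X in datasets]
    ```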

    Reproducibility of BOLD-based functional MRI obtained at 4 Tesla

    The reproducibility of activation patterns in the whole brain obtained by functional magnetic resonance imaging (fMRI) experiments at 4 Tesla was studied with a simple finger-opposition task. Six subjects performed three runs in one session, and each run was analyzed separately with the t-test as a univariate method and Fisher's linear discriminant analysis as a multivariate method. First- and third-order polynomial detrending, as well as logarithmic transformation, were tested as preprocessing steps for the t-test to assess their impact on reproducibility. Reproducibility across the whole brain was studied by using scatter plots of statistical values and calculating the correlation coefficient between pairs of activation maps. In order to compare the reproducibility of "activated" voxels across runs, subjects, and models, the 2% of all voxels in the brain with the highest statistical values were classified as activated. The analysis of reproducible activated voxels was performed for the whole brain and within regions of interest. We found considerable variability in reproducibility across subjects, regions of interest, and analysis methods. The t-test on linearly detrended data yielded better reproducibility than Fisher's linear discriminant analysis, and therefore appears to be a robust although conservative method. Preliminary data indicate that these results may be reversed by preprocessing to reduce respiratory and cardiac physiological noise effects. The reproducibility of both the position and number of activated voxels was highest in the sensorimotor cortex and much lower in the supplementary motor area, with reproducibility in the cerebellum falling between the other two areas.
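    As a concrete illustration of the two reproducibility measures described above, the following sketch (an assumption about implementation, not the study's code) computes the correlation between two voxel-wise statistical maps and the overlap of the top 2% "activated" voxels.

    ```python
    # Hedged sketch of the reproducibility measures described in the abstract.
    # Inputs are assumed to be 1-D arrays of per-voxel statistics within a mask.
    import numpy as np

    def map_correlation(stat_a, stat_b):
        """Pearson correlation between paired voxel-wise statistical values."""
        return np.corrcoef(stat_a, stat_b)[0, 1]

    def activated_overlap(stat_a, stat_b, fraction=0.02):
        """Fraction of 'activated' voxels (top 2% by default) common to both runs."""
        k = max(1, int(round(fraction * stat_a.size)))
        top_a = set(np.argsort(stat_a)[-k:])
        top_b = set(np.argsort(stat_b)[-k:])
        return len(top_a & top_b) / k
    ```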

    Comparison of quality control methods for automated diffusion tensor imaging analysis pipelines

    © 2019 Haddad et al. The processing of brain diffusion tensor imaging (DTI) data for large cohort studies requires fully automatic pipelines to perform quality control (QC) and artifact/outlier removal on the raw DTI data prior to calculation of diffusion parameters. In this study, three automatic DTI processing pipelines, each complying with the general ENIGMA framework, were designed by uniquely combining multiple image-processing software tools. QC procedures based on the RESTORE algorithm, the DTIPrep protocol, and a combination of both methods were compared using simulated ground-truth and artifact-containing DTI datasets modeling eddy-current-induced distortions, various levels of motion artifacts, and thermal noise. Variability was also examined in 20 DTI datasets acquired in subjects with vascular cognitive impairment (VCI) from the multi-site Ontario Neurodegenerative Disease Research Initiative (ONDRI). The mean fractional anisotropy (FA), mean diffusivity (MD), axial diffusivity (AD), and radial diffusivity (RD) were calculated in global brain grey matter (GM) and white matter (WM) regions. For the simulated DTI datasets, pipeline performance was evaluated as the normalized difference between the mean DTI metrics measured in GM and WM regions and the corresponding ground-truth DTI values. The proposed pipelines performed very similarly, particularly for FA measurements. However, the pipeline based on the RESTORE algorithm was the most accurate when analyzing the artifact-containing DTI datasets. The pipeline that combined the DTIPrep protocol and the RESTORE algorithm produced the lowest standard deviation in FA measurements in normal-appearing WM across subjects. We concluded that this pipeline was the most robust and is preferred for automated analysis of multi-site brain DTI data.
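    For reference, the scalar maps compared across pipelines, and the normalized-difference performance measure, follow directly from the tensor eigenvalues; the sketch below (not the ENIGMA/ONDRI pipeline code) shows the standard definitions.

    ```python
    # Hedged sketch: standard DTI scalars from tensor eigenvalues, plus the
    # normalized-difference measure used to score pipelines against ground truth.
    import numpy as np

    def dti_scalars(eigvals):
        """eigvals: (..., 3) eigenvalues sorted descending (l1 >= l2 >= l3)."""
        l1, l2, l3 = eigvals[..., 0], eigvals[..., 1], eigvals[..., 2]
        md = (l1 + l2 + l3) / 3.0                      # mean diffusivity
        ad = l1                                        # axial diffusivity
        rd = (l2 + l3) / 2.0                           # radial diffusivity
        fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
                     / (l1 ** 2 + l2 ** 2 + l3 ** 2))  # fractional anisotropy
        return fa, md, ad, rd

    def normalized_difference(measured_mean, ground_truth_mean):
        """Normalized deviation of a regional mean metric from its ground truth."""
        return (measured_mean - ground_truth_mean) / ground_truth_mean
    ```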

    The Canadian Dementia Imaging Protocol: Harmonization validity for morphometry measurements

    © 2019 The Authors The harmonized Canadian Dementia Imaging Protocol (CDIP) has been developed to suit the needs of a number of co-occurring Canadian studies collecting data on brain changes across adulthood and neurodegeneration. In this study, we verify the impact of compliance with CDIP parameters on total-brain-volume variance using 86 scans of the same individual acquired on various scanners. Data included planned data collection acquired within the Consortium pour l'identification précoce de la maladie Alzheimer - Québec (CIMA-Q) and Canadian Consortium on Neurodegeneration in Aging (CCNA) studies, as well as opportunistic data collection from various protocols. For images acquired on Philips scanners, lower variance in brain volumes was observed when the resolution stated in the CDIP was used. For images acquired on GE scanners, lower variance in brain volumes was observed when TE/TR values were within 5% of the CDIP protocol, compared to values farther from that criterion. Together, these results suggest that a harmonized protocol like the CDIP may help to reduce neuromorphometric measurement variability in multi-centric studies.
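    The compliance check described above (TE/TR within 5% of the protocol) and the variance comparison lend themselves to a short sketch; the target values below are placeholders, not the actual CDIP specification.

    ```python
    # Hedged sketch, not the CDIP tooling: flag scans whose TE/TR fall within 5%
    # of the protocol values and compare brain-volume variance between groups.
    import numpy as np

    PROTOCOL_MS = {"TE": 3.0, "TR": 2300.0}  # placeholder targets, not the CDIP spec

    def compliant(te_ms, tr_ms, tol=0.05):
        """True if both TE and TR are within 5% of the protocol values."""
        return (abs(te_ms - PROTOCOL_MS["TE"]) <= tol * PROTOCOL_MS["TE"]
                and abs(tr_ms - PROTOCOL_MS["TR"]) <= tol * PROTOCOL_MS["TR"])

    def volume_variance_by_compliance(volumes, te_ms, tr_ms):
        """volumes, te_ms, tr_ms: 1-D arrays over repeated scans of one individual."""
        ok = np.array([compliant(t, r) for t, r in zip(te_ms, tr_ms)])
        return np.var(volumes[ok]), np.var(volumes[~ok])
    ```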

    Multisite Comparison of MRI Defacing Software Across Multiple Cohorts

    With improvements to both scan quality and facial recognition software, there is an increased risk of participants being identified from a 3D render of their structural neuroimaging scans, even when all other personal information has been removed. To prevent this, facial features should be removed before data are shared or openly released; while several publicly available software algorithms can do this, there has been no comprehensive review of their accuracy in the general population. To address this, we tested multiple algorithms on 300 scans from three neuroscience research projects, funded in part by the Ontario Brain Institute, covering a wide range of ages (3–85 years) and multiple patient cohorts. Although skull stripping removes identifiable features more thoroughly, we focused mainly on defacing software, as skull stripping also removes potentially useful information that may be required for future analyses. We tested six publicly available defacing algorithms (afni_refacer, deepdefacer, mri_deface, mridefacer, pydeface, quickshear), with one skull stripper (FreeSurfer) included for comparison. Accuracy was measured with a pass/fail system based on two criteria: first, that all facial features had been removed, and second, that no brain tissue was removed in the process. A subset of defaced scans was also run through several preprocessing pipelines to ensure that none of the algorithms would alter the resulting outputs. We found that success rates varied strongly between defacers, with afni_refacer (89%) and pydeface (83%) having the highest overall rates. In both cases, the primary source of failure was a single dataset that the defacer appeared to struggle with: the youngest cohort (3–20 years) for afni_refacer and the oldest (44–85 years) for pydeface. This demonstrates that defacer performance not only depends on the data provided, but that this effect varies between algorithms. While there were some very minor differences between the preprocessing results for defaced and original scans, none of these were significant, and all fell within the range of variation between using different NIfTI converters or using raw DICOM files.
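    The pass/fail scoring described above reduces to a simple tally per algorithm; the sketch below (field names are assumptions, not the study's code) computes a success rate in which a scan passes only if both criteria are met.

    ```python
    # Hedged sketch: aggregate per-scan pass/fail ratings into per-defacer rates.
    from collections import defaultdict

    def success_rates(ratings):
        """ratings: iterable of dicts like
        {"defacer": "pydeface", "face_removed": True, "brain_intact": True}."""
        passed, total = defaultdict(int), defaultdict(int)
        for r in ratings:
            total[r["defacer"]] += 1
            # A scan passes only if both criteria are satisfied.
            passed[r["defacer"]] += int(r["face_removed"] and r["brain_intact"])
        return {name: passed[name] / total[name] for name in total}
    ```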